50 research outputs found

    Bringing BCI into everyday life: Motor imagery in a pseudo realistic environment

    Bringing Brain-Computer Interfaces (BCIs) into everyday life is a challenge because an out-of-lab environment implies the presence of variables that are largely beyond the control of the user and the software application. This can severely corrupt signal quality as well as the reliability of BCI control. Current BCI technology may fail in this application scenario because of the large amounts of noise, nonstationarity and movement artifacts. In this paper, we systematically investigate the performance of motor imagery BCI in a pseudo realistic environment. In our study, 16 participants were asked to perform motor imagery tasks while dealing with different types of distractions such as vibratory stimulations or listening tasks. Our experiments demonstrate that standard BCI procedures are not robust to these additional sources of noise, implying that methods which work well in a lab environment may perform poorly in realistic application scenarios. We discuss several promising research directions to tackle this important problem.
    Funding: BMBF, 01GQ1115, Adaptive Gehirn-Computer-Schnittstellen (BCI) in nichtstationären Umgebungen
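
    The following is a minimal sketch (in Python, not the authors' code) of the kind of analysis described above: evaluating a standard motor-imagery pipeline separately for each distraction condition to see how performance degrades. The data layout `epochs_by_condition` is an assumption, mapping each condition name to band-pass filtered EEG epochs X of shape (n_trials, n_channels, n_samples) and labels y.

```python
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline


def per_condition_accuracy(epochs_by_condition, n_csp=6):
    """Cross-validated accuracy of CSP + shrinkage-LDA for each distraction condition."""
    scores = {}
    for condition, (X, y) in epochs_by_condition.items():
        clf = make_pipeline(
            CSP(n_components=n_csp, log=True),                            # spatial filters on band power
            LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),  # regularized LDA
        )
        scores[condition] = cross_val_score(clf, X, y, cv=5).mean()
    return scores
```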

    Times Are Changing: Investigating the Pace of Language Change in Diachronic Word Embeddings

    We propose Word Embedding Networks, a novel method that learns word embeddings for individual data slices while simultaneously aligning and ordering them, without feeding temporal information a priori to the model. This gives us the opportunity to analyse the dynamics of word embeddings on a large scale in a purely data-driven manner. In experiments on two newspaper corpora, the New York Times (English) and Die Zeit (German), we show that time indeed determines the dynamics of semantic change. However, the evolution is by no means uniform; instead, there are times of faster and times of slower change.
    Funding: BMBF, 01IS17058, MALT3 - MAschinelles Lernen - The Tricks of the Trade
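
    The Word Embedding Networks model learns and aligns slice embeddings jointly; as a point of comparison, the sketch below shows the widely used pairwise baseline in which independently trained slice embeddings are aligned with orthogonal Procrustes and words are ranked by the cosine distance between their aligned vectors. The inputs `emb_t1` and `emb_t2` (word-to-vector dictionaries) are assumptions, and this is not the proposed model.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes


def semantic_change(emb_t1, emb_t2):
    """Cosine distance per word between two independently trained, Procrustes-aligned slices."""
    shared = sorted(set(emb_t1) & set(emb_t2))
    A = np.stack([emb_t1[w] for w in shared])
    B = np.stack([emb_t2[w] for w in shared])
    R, _ = orthogonal_procrustes(A, B)        # rotation mapping slice 1 onto slice 2
    A_aligned = A @ R
    cos = np.sum(A_aligned * B, axis=1) / (
        np.linalg.norm(A_aligned, axis=1) * np.linalg.norm(B, axis=1)
    )
    return dict(zip(shared, 1.0 - cos))       # larger distance = more semantic change
```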

    Robust artifactual independent component classification for BCI practitioners

    Objective. EEG artifacts of non-neural origin can be separated from neural signals by independent component analysis (ICA). It is unclear (1) how robustly recently proposed artifact classifiers transfer to novel users, novel paradigms or changed electrode setups, and (2) how artifact cleaning by a machine learning classifier impacts the performance of brain–computer interfaces (BCIs). Approach. Addressing (1), the robustness of different strategies with respect to the transfer between paradigms and electrode setups of a recently proposed classifier is investigated on offline data from 35 users and 3 EEG paradigms, which contain 6303 expert-labeled components from two ICA and preprocessing variants. Addressing (2), the effect of artifact removal on single-trial BCI classification is estimated on BCI trials from 101 users and 3 paradigms. Main results. We show that (1) the proposed artifact classifier generalizes to completely different EEG paradigms. To obtain similar results under massively reduced electrode setups, a proposed novel strategy improves artifact classification. Addressing (2), ICA artifact cleaning has little influence on average BCI performance when analyzed by state-of-the-art BCI methods. When slow motor-related features are exploited, performance varies strongly between individuals, as artifacts may obstruct relevant neural activity or are inadvertently used for BCI control. Significance. Robustness of the proposed strategies can be reproduced by EEG practitioners as the method is made available as an EEGLAB plug-in.
    Funding: EC/FP7/224631/EU/Tools for Brain-Computer Interaction/TOBI; BMBF, 01GQ0850, Verbundprojekt: Bernstein Fokus Neurotechnologie - Nichtinvasive Neurotechnologie für Mensch-Maschine Interaktion - Teilprojekte A1, A3, A4, B4, W3, Zentrum; DFG, 194657344, EXC 1086: BrainLinks-BrainTools
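
    The classifier evaluated in this study ships as an EEGLAB plug-in; for readers working in Python, the sketch below outlines the general ICA-cleaning step with MNE. The `classify_components` callback is a hypothetical stand-in for any component classifier that returns one artifact probability per independent component, and is not the paper's classifier.

```python
import mne


def clean_with_ica(raw, classify_components, threshold=0.5):
    """Remove independent components whose artifact probability exceeds `threshold`."""
    raw = raw.copy().filter(l_freq=1.0, h_freq=None)     # high-pass filtering stabilizes ICA
    ica = mne.preprocessing.ICA(n_components=0.99, random_state=0)
    ica.fit(raw)
    artifact_prob = classify_components(ica, raw)        # hypothetical classifier, one probability per IC
    ica.exclude = [i for i, p in enumerate(artifact_prob) if p > threshold]
    return ica.apply(raw)                                # reconstruct the cleaned recording
```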

    On the Interplay between Fairness and Explainability

    In order to build reliable and trustworthy NLP applications, models need to be both fair across different demographics and explainable. Usually these two objectives, fairness and explainability, are optimized and/or examined independently of each other. Instead, we argue that forthcoming, trustworthy NLP systems should consider both. In this work, we perform a first study to understand how they influence each other: do fair(er) models rely on more plausible rationales, and vice versa? To this end, we conduct experiments on two English multi-class text classification datasets, BIOS and ECtHR, that provide information on gender and nationality, respectively, as well as human-annotated rationales. We fine-tune pre-trained language models with several methods for (i) bias mitigation, which aims to improve fairness, and (ii) rationale extraction, which aims to produce plausible explanations. We find that bias mitigation algorithms do not always lead to fairer models. Moreover, we discover that empirical fairness and explainability are orthogonal.
    Comment: 15 pages (incl. Appendix), 4 figures, 8 tables
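
    As an illustration of the kind of group-fairness measurement such a study relies on, the sketch below computes a per-class true-positive-rate gap between two demographic groups. Variable names and the binary group encoding are illustrative assumptions, not the paper's code.

```python
import numpy as np


def tpr_gap(y_true, y_pred, group):
    """Mean absolute per-class TPR difference between group 0 and group 1."""
    y_true, y_pred, group = (np.asarray(a) for a in (y_true, y_pred, group))
    gaps = []
    for label in np.unique(y_true):
        tprs = []
        for g in (0, 1):
            mask = (y_true == label) & (group == g)
            if mask.any():
                tprs.append(np.mean(y_pred[mask] == label))   # recall for this class within group g
        if len(tprs) == 2:
            gaps.append(abs(tprs[0] - tprs[1]))
    return float(np.mean(gaps))
```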

    Loss of nonsense mediated decay suppresses mutations in Saccharomyces cerevisiae TRA1

    Background. Tra1 is an essential protein in Saccharomyces cerevisiae. It was first identified in the SAGA and NuA4 complexes, both with functions in multiple aspects of gene regulation and DNA repair, and recently found in the ASTRA complex. Tra1 belongs to the PIKK family of proteins with a C-terminal PI3K domain followed by a FATC domain. Previously we found that mutation of leucine to alanine at position 3733 in the FATC domain of Tra1 (tra1-L3733A) results in transcriptional changes and slow growth under conditions of stress. To further define the regulatory interactions of Tra1, we isolated extragenic suppressors of the tra1-L3733A allele.
    Results. We screened for suppressors of the ethanol sensitivity caused by tra1-L3733A. Eleven extragenic recessive mutations, belonging to three complementation groups, were identified that partially suppressed a subset of the phenotypes caused by tra1-L3733A. Using whole-genome sequencing, we identified one of the mutations as an opal mutation at tryptophan 165 of UPF1/NAM7. Partial suppression of the transcriptional defect resulting from tra1-L3733A was observed at GAL10, but not at PHO5. Suppression was due to loss of nonsense-mediated decay (NMD), since deletion of any one of the three NMD surveillance components (upf1/nam7, upf2/nmd2, or upf3) mediated the effect. Deletion of upf1 suppressed a second FATC domain mutation, tra1-F3744A, as well as a mutation in the PI3K domain. In contrast, deletions of SAGA or NuA4 components were not suppressed.
    Conclusions. We have demonstrated a genetic interaction between TRA1 and genes of the NMD pathway. The suppression is specific for mutations in TRA1. Since NMD and Tra1 generally act reciprocally to control gene expression, and the FATC domain mutations do not directly affect NMD, we suggest that suppression occurs as a result of overlap and/or crosstalk between these two broad regulatory networks.

    WebQAmGaze: A Multilingual Webcam Eye-Tracking-While-Reading Dataset

    We create WebQAmGaze, a multilingual low-cost eye-tracking-while-reading dataset, designed to support the development of fair and transparent NLP models. WebQAmGaze includes webcam eye-tracking data from 332 participants naturally reading English, Spanish, and German texts. Each participant performs two reading tasks composed of five texts: a normal reading task and an information-seeking task. After preprocessing the data, we find that fixations on relevant spans seem to indicate correctness when answering the comprehension questions. Additionally, we compare the collected data to high-quality eye-tracking data. The results show a moderate correlation between the features obtained with the webcam ET and those of a commercial ET device. We believe this data can advance webcam-based reading studies and open the way to cheaper and more accessible data collection. WebQAmGaze is useful for learning about the cognitive processes behind question answering (QA) and for applying these insights to computational models of language understanding.
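
    A minimal sketch of the device comparison described above: correlating a per-word reading measure (e.g. total fixation duration) obtained from the webcam eye-tracker with the same measure from a commercial device. Variable names are assumptions; the dataset's actual field names may differ.

```python
import numpy as np
from scipy.stats import spearmanr


def feature_correlation(webcam_feature, commercial_feature):
    """Rank correlation between per-word reading measures from the two devices."""
    webcam = np.asarray(webcam_feature, dtype=float)
    commercial = np.asarray(commercial_feature, dtype=float)
    valid = ~np.isnan(webcam) & ~np.isnan(commercial)    # drop words missing from either device
    rho, p_value = spearmanr(webcam[valid], commercial[valid])
    return rho, p_value
```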

    Rather a Nurse than a Physician -- Contrastive Explanations under Investigation

    Contrastive explanations, where one decision is explained in contrast to another, are supposed to be closer to how humans explain a decision than non-contrastive explanations, where the decision is not necessarily referenced to an alternative. This claim has never been empirically validated. We analyze four English text-classification datasets (SST2, DynaSent, BIOS and DBpedia-Animals). We fine-tune and extract explanations from three different models (RoBERTa, GPT-2, and T5), each in three different sizes, and apply three post-hoc explainability methods (LRP, GradientxInput, GradNorm). We furthermore collect and release human rationale annotations for a subset of 100 samples from the BIOS dataset for contrastive and non-contrastive settings. A cross-comparison between model-based rationales and human annotations, both in contrastive and non-contrastive settings, yields a high agreement between the two settings for models as well as for humans. Moreover, model-based explanations computed in both settings align equally well with human rationales. Thus, we empirically find that humans do not necessarily explain in a contrastive manner.
    Comment: 9 pages, long paper in the EMNLP 2023 proceedings
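
    A brief sketch of one of the attribution methods named above (GradientxInput) for a fine-tuned sequence classifier. The `model_name` and overall setup are placeholders rather than the paper's implementation; a contrastive variant would backpropagate the difference between a target and a foil logit instead of a single class logit.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def gradient_x_input(model_name, text, target_class):
    """Per-token GradientxInput attributions for `target_class`."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()
    enc = tokenizer(text, return_tensors="pt")
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
    logits[0, target_class].backward()                       # gradient of the chosen class logit
    scores = (embeds.grad * embeds).sum(dim=-1).squeeze(0)   # GradientxInput per input token
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return list(zip(tokens, scores.tolist()))
```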

    Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models

    Pretrained machine learning models are known to perpetuate and even amplify existing biases in data, which can result in unfair outcomes that ultimately impact user experience. Therefore, it is crucial to understand the mechanisms behind those prejudicial biases to ensure that model performance does not result in discriminatory behaviour toward certain groups or populations. In this work, we use gender bias as our case study. We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models. We investigate the connection, if any, between the two learning stages, and evaluate how bias amplification reflects on model performance. Overall, we find that bias amplification in pretraining and after fine-tuning are independent. We then examine the effect of continued pretraining on gender-neutral data, finding that this reduces group disparities, i.e., promotes fairness, on VQAv2 and retrieval tasks without significantly compromising task performance.
    Comment: To appear in EMNLP 2023
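
    To illustrate the general idea of bias amplification (without reproducing this paper's exact metric), the sketch below compares how strongly each attribute skews toward one gender in the training annotations versus in the model's predictions. Variable names and the gender encoding are assumptions.

```python
import numpy as np


def gender_skew(gender, attr, a, positive="f"):
    """Fraction of instances with attribute `a` carrying the `positive` gender label."""
    gender, attr = np.asarray(gender), np.asarray(attr)
    mask = attr == a
    return np.mean(gender[mask] == positive) if mask.any() else np.nan


def bias_amplification(train_gender, train_attr, pred_gender, pred_attr):
    """Mean increase in gender skew (distance from parity) from training data to predictions."""
    deltas = []
    for a in np.unique(train_attr):
        s_train = gender_skew(train_gender, train_attr, a)
        s_pred = gender_skew(pred_gender, pred_attr, a)
        if not (np.isnan(s_train) or np.isnan(s_pred)):
            deltas.append(abs(s_pred - 0.5) - abs(s_train - 0.5))   # positive = amplified skew
    return float(np.mean(deltas))
```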

    Motor Imagery Under Distraction — An Open Access BCI Dataset

    Funding: TU Berlin, Open-Access-Mittel – 2020; BMBF, 01IS18025A, Verbundprojekt BIFOLD-BBDC: Berlin Institute for the Foundations of Learning and Data; BMBF, 01IS18037A, Verbundprojekt BIFOLD-BZML: Berlin Institute for the Foundations of Learning and Data

    Brain–computer interfacing under distraction: an evaluation study

    Objective. While motor-imagery based brain–computer interfaces (BCIs) have been studied over many years by now, most of these studies have taken place in controlled lab settings. Bringing BCI technology into everyday life is still one of the main challenges in this field of research. Approach. This paper systematically investigates BCI performance under 6 types of distractions that mimic out-of-lab environments. Main results. We report results of 16 participants and show that the performance of the standard common spatial patterns (CSP) + regularized linear discriminant analysis classification pipeline drops significantly in this 'simulated' out-of-lab setting. We then investigate three methods for improving the performance: (1) artifact removal, (2) ensemble classification, and (3) a 2-step classification approach. While artifact removal does not enhance the BCI performance significantly, both ensemble classification and the 2-step classification combined with CSP significantly improve the performance compared to the standard procedure. Significance. Systematically analyzing out-of-lab scenarios is crucial when bringing BCI into everyday life. Algorithms must be adapted to overcome nonstationary environments in order to tackle real-world challenges.
    Funding: BMBF, 01GQ1115, Adaptive Gehirn-Computer-Schnittstellen (BCI) in nichtstationären Umgebungen
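
    One plausible way to realize the ensemble idea above is to train a CSP + shrinkage-LDA model per distraction condition and average their decision scores at test time. The sketch below illustrates that reading under an assumed `epochs_by_condition` layout (condition name mapped to epochs X and labels y); it is not the authors' exact ensemble or 2-step implementation.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline


def train_condition_ensemble(epochs_by_condition, n_csp=6):
    """Fit one CSP + regularized LDA pipeline per distraction condition."""
    models = {}
    for condition, (X, y) in epochs_by_condition.items():
        clf = make_pipeline(
            CSP(n_components=n_csp, log=True),
            LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
        )
        models[condition] = clf.fit(X, y)
    return models


def predict_ensemble(models, X_test):
    """Average per-condition decision scores; assumes binary motor-imagery classes coded 0/1."""
    scores = np.mean([m.decision_function(X_test) for m in models.values()], axis=0)
    return (scores > 0).astype(int)
```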